Imaging Systems | 524 Article(s)
High-Dynamic-Range Ptychography Using Maximum Likelihood Noise Estimation
Wenjie Li, Honggang Gu, Li Liu, Lei Zhong, Yu Zhou, and Shiyuan Liu
The richness and accuracy of the diffraction patterns are crucial constraints in ptychography and directly affect the quality of the reconstructed images. This paper proposes a high-dynamic-range ptychography method based on maximum likelihood noise estimation (ML-HDR). Assuming a linear detector response, a compound Gaussian noise model is established, the weight function is optimized according to the maximum likelihood estimate, and a high signal-to-noise-ratio diffraction pattern is synthesized from multiple low-dynamic-range diffraction patterns. The reconstruction quality of single exposure, conventional HDR, and ML-HDR is compared. Simulation and experimental results show that, compared with single exposure, ML-HDR widens the dynamic range by 8 bits and improves the reconstruction resolution by a factor of 2.83. Moreover, compared with conventional HDR, ML-HDR enhances the contrast and uniformity of the reconstructed image without requiring additional hardware parameters.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0811011 (2024)
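As a rough illustration of the HDR fusion idea described in the abstract above, the following Python sketch combines several low-dynamic-range diffraction patterns by inverse-variance (maximum-likelihood) weighting under a generic linear-detector noise model; the gain, read-noise, and saturation parameters are placeholders, not the paper's actual weight function.

```python
import numpy as np

def ml_hdr_synthesis(patterns, exposure_times, gain=1.0, read_noise_var=4.0,
                     saturation=2**16 - 1):
    """Fuse multiple LDR diffraction patterns into one HDR pattern.

    Illustrative inverse-variance (maximum-likelihood) weighting under an
    assumed compound Gaussian noise model var = gain*counts + read_noise_var.
    Saturated pixels are excluded from the fusion.
    """
    patterns = np.asarray(patterns, dtype=np.float64)        # shape (K, H, W)
    times = np.asarray(exposure_times, dtype=np.float64)[:, None, None]

    flux = patterns / times                                   # per-frame flux estimate
    var_counts = gain * patterns + read_noise_var             # per-pixel noise variance in counts
    weights = times**2 / var_counts                           # inverse variance of the flux estimate
    weights[patterns >= saturation] = 0.0                     # drop clipped pixels

    return np.sum(weights * flux, axis=0) / np.maximum(np.sum(weights, axis=0), 1e-12)
```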
Analysis of Optimal Spatial Resolution Capability of Pulse-Dilation Framing Camera
Huan Chen, Yanli Bai, Heng Li, and Haiying Gao
The pulse-dilation framing camera with short magnetic focusing is a two-dimensional ultrafast diagnostic device with a long drift region. Its paraxial spatial resolution and detection area are usually evaluated by the point spatial resolution on-axis and off-axis, respectively. However, because the field curvature makes the Gaussian image plane spatially nonuniform, the overall spatial resolution of the camera is difficult to evaluate. This study therefore proposes a new method to quantify the spatial resolution of a pulse-dilation framing camera. The method is based on a model constructed in the COMSOL software, in which the three-dimensional imaging surface is reconstructed from the characteristics of the field curvature. The deviation between the imaging surface and the Gaussian image plane is analyzed using the standard deviation (SD), the spatial resolution of the Gaussian image plane is obtained by combining the point spatial resolution with the overall modulation, and the spatial resolution uniformity of the Gaussian image plane is quantified using the relative error. The results show that when the lens aperture is 200 mm, the slit width is 10 mm, the axial width is 100 mm, the drift region length is 400 mm, the imaging radius is 21 mm, and the cathode voltage is -3.75 kV, both the deviation between the imaging surface and the Gaussian image plane and the spatial resolution of the Gaussian image plane vary with the magnetic field along an upward-opening parabola. When the imaging magnetic field is 41.97 Gs (1 Gs = 10⁻⁴ T), the SD of the deviation between the two image planes is minimized at 2.82 mm, the spatial resolution of the Gaussian image plane reaches its optimum of 292.80 μm, and the modulation difference characterizing the spatial uniformity is minimized to 330%. In conclusion, this study provides a quantifiable reference method for evaluating the optimal spatial resolution performance of a pulse-dilation framing camera with short magnetic focusing.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0811010 (2024)
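A minimal sketch of the two evaluation quantities mentioned in the abstract above, the standard deviation of the image-surface deviation and a relative-error uniformity measure, assuming simple definitions that may differ from the paper's exact formulas.

```python
import numpy as np

def surface_deviation_sd(z_image_surface, z_gaussian_plane):
    """Standard deviation of the axial deviation between the reconstructed
    imaging surface and the Gaussian image plane (illustrative definition)."""
    dev = np.asarray(z_image_surface, dtype=float) - z_gaussian_plane
    return np.std(dev)

def resolution_relative_error(resolutions):
    """Relative error taken here as spread over mean of the point spatial
    resolutions across the Gaussian image plane (assumed form)."""
    r = np.asarray(resolutions, dtype=float)
    return (r.max() - r.min()) / r.mean()
```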
Jamming of Supercontinuum Spectrum Laser on Imaging Systems in Different Backgrounds
Yu Fan, Xiangzheng Cheng, Ming Shao, and Wei Liu
The study of the jamming effects of supercontinuum spectrum lasers on visible-light imaging systems has broad potential applications. Focusing on this jamming effect, experiments were conducted to analyze the interference imposed on visible-light imaging systems under backgrounds of varying radiation brightness. A white-light fiber laser was used to generate a supercontinuum spectrum interference source, and an experimental system was constructed to evaluate the jamming effects of the supercontinuum spectrum laser on visible-light imaging systems. Jamming threshold data for detectors at different irradiance levels were obtained, along with a mathematical model relating the detector's saturated pixel count to the jamming laser's power density. The results indicate that the detector's saturated pixel count is approximately log-linear with respect to the interference laser's power density, and that the visible-light imaging system is more vulnerable to interference when operating against low-irradiance backgrounds. These findings provide valuable guidance for the design, demonstration, and operation of supercontinuum spectrum laser jamming equipment.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0811009 (2024)
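The reported log-linear relation between saturated pixel count and jamming power density can be fitted as a straight line in log10(power density); the sketch below uses hypothetical measurement values purely for illustration.

```python
import numpy as np

def fit_saturation_model(power_density, saturated_pixels):
    """Fit N_sat ≈ a*log10(P) + b, the approximately log-linear relation between
    saturated pixel count and jamming-laser power density."""
    x = np.log10(np.asarray(power_density, dtype=float))
    y = np.asarray(saturated_pixels, dtype=float)
    a, b = np.polyfit(x, y, 1)      # slope a, intercept b
    return a, b

# Hypothetical measurements (power density, saturated pixel count), illustration only.
P = [1e-6, 1e-5, 1e-4, 1e-3]
N = [120, 900, 1750, 2600]
print(fit_saturation_model(P, N))
```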
Reconstruction-Free Object Recognition Scheme in Lensless Imaging Systems
Kaiyu Chen, Ying Li, Zhengdai Li, and Youming Guo
Lensless imaging systems replace lenses with masks, reducing cost and making equipment lighter. However, an image must normally be reconstructed before object recognition, and this reconstruction involves parameter tuning and time-consuming computation. Hence, a reconstruction-free object recognition scheme is proposed that directly trains networks to recognize objects from the encoded images captured by lensless cameras, saving computing resources and protecting privacy. Using lensless cameras with a phase mask and an amplitude mask, a real MNIST dataset is collected and simulated MNIST and Fashion MNIST datasets are generated. The ResNet-50 and Swin_T networks are then trained on these datasets for object recognition. The results show that on the simulated MNIST, Fashion MNIST, and real MNIST datasets, the highest recognition accuracies achieved by the proposed scheme are 99.51%, 92.31%, and 98.06%, respectively. These accuracies are comparable to those achieved by reconstruction-based recognition, demonstrating that the proposed scheme is an efficient, privacy-preserving end-to-end scheme. Moreover, the scheme is verified with two types of masks and two conventional backbone classification networks.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0811008 (2024)
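A minimal PyTorch sketch of the reconstruction-free idea: a standard ResNet-50 classifier is trained directly on single-channel encoded lensless measurements. The input-layer adaptation, optimizer, and learning rate are assumptions rather than the paper's training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Train a standard ResNet-50 directly on encoded (raw sensor) lensless
# measurements, skipping image reconstruction entirely.
model = models.resnet50(num_classes=10)  # 10 classes for (Fashion-)MNIST
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel input

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameters

def train_step(encoded_batch, labels):
    """encoded_batch: (B, 1, H, W) raw lensless measurements; labels: (B,)."""
    optimizer.zero_grad()
    loss = criterion(model(encoded_batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```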
High-Precision Camera Calibration Method Based on Subpixel Edge Detection and Circularity Correction Compensation
Chaohai Kang, Li Hong, Weijian Ren, and Fengcai Huo
During camera calibration with a circular calibration board, large lens distortion degrades image quality and blurs the projected circle edges, resulting in calibration errors. Accordingly, a high-precision camera calibration method based on subpixel edge detection and center correction compensation is proposed. First, the Canny-Zernike moment method is used to extract subpixel circular feature contour points, and edge point chains connect independent contour points into closed, accurate feature contours, improving the extraction of blurred edge contours. Second, the inner and outer contours are sampled separately and the sampled points are used to fit ellipses; the mean of the two ellipse centers serves as the feature point, and an ordered feature point set is obtained through a three-point judgment sorting method for rough calibration. Finally, the rough calibration parameters are used to correct the sampled contour points, and the center coordinates are recomputed and back-projected onto the image for precise camera calibration, achieving center correction compensation under distortion. Experimental results show that the proposed method effectively improves camera calibration accuracy when the lens distortion is large.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0811006 (2024)
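For the feature-point step described in the abstract above, one possible sketch (using OpenCV, with the Canny-Zernike subpixel extraction omitted) fits ellipses to the inner and outer contours and averages the two centers.

```python
import numpy as np
import cv2

def circle_center_from_ring(inner_pts, outer_pts):
    """Fit ellipses to the inner and outer subpixel contours of a circular
    marker and average the two centers (coarse feature-point extraction).
    inner_pts / outer_pts: (N, 2) float32 arrays of contour points, N >= 5."""
    (cx_in, cy_in), _, _ = cv2.fitEllipse(np.asarray(inner_pts, np.float32))
    (cx_out, cy_out), _, _ = cv2.fitEllipse(np.asarray(outer_pts, np.float32))
    return 0.5 * (cx_in + cx_out), 0.5 * (cy_in + cy_out)
```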
Point Cloud 3D Object Detection Based on Improved SECOND Algorithm
Ying Zhang, Liangliang Jiang, Dongbo Zhang, Wanlin Duan, and Yue Sun
Rapid identification and precise positioning of surrounding targets are prerequisites for safe autonomous driving. A point cloud 3D object detection algorithm based on an improved SECOND algorithm is proposed to address the inaccurate recognition and positioning of voxel-based point cloud 3D object detection methods. First, an adaptive spatial feature fusion module is introduced into the 2D convolutional backbone network to fuse spatial features of different scales and improve the model's feature expression capability. Second, by fully exploiting the correlation between bounding box parameters, the three-dimensional distance-intersection over union (3D DIoU) is adopted as the bounding box localization regression loss function, improving the efficiency of the regression task. Finally, a new candidate box quality evaluation criterion that considers both classification confidence and localization accuracy is used to obtain smoother regression results. Experimental results on the KITTI test set demonstrate that the 3D detection accuracy of the proposed algorithm surpasses that of many previous algorithms. Compared with the SECOND baseline, accuracy for the car and cyclist classes improves by 2.86 and 3.84 percentage points on the easy difficulty level, 2.99 and 3.89 percentage points on the moderate level, and 7.06 and 4.27 percentage points on the hard level, respectively.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0811005 (2024)
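A simplified, axis-aligned sketch of the 3D DIoU loss used for box regression; the actual method regresses oriented boxes, so rotation handling is omitted here.

```python
import numpy as np

def diou_3d_loss(box_a, box_b):
    """Axis-aligned 3D DIoU loss sketch. Boxes are (cx, cy, cz, l, w, h)."""
    a, b = np.asarray(box_a, float), np.asarray(box_b, float)
    a_min, a_max = a[:3] - a[3:] / 2, a[:3] + a[3:] / 2
    b_min, b_max = b[:3] - b[3:] / 2, b[:3] + b[3:] / 2

    inter = np.prod(np.clip(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0, None))
    union = np.prod(a[3:]) + np.prod(b[3:]) - inter
    iou = inter / union

    d2 = np.sum((a[:3] - b[:3]) ** 2)                                        # squared center distance
    c2 = np.sum((np.maximum(a_max, b_max) - np.minimum(a_min, b_min)) ** 2)  # squared enclosing diagonal
    return 1.0 - (iou - d2 / c2)                                             # L = 1 - DIoU
```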
Point Cloud Segmentation Algorithm Based on Density Awareness and Self-Attention Mechanism
Bin Lu, Yawei Liu, Yuhang Zhang, and Zhenyu Yang
We propose a 3D point cloud semantic segmentation algorithm based on density awareness and a self-attention mechanism to address the insufficient use of inter-point density information and spatial location features in existing 3D point cloud semantic segmentation algorithms. First, based on an adaptive K-nearest neighbor (KNN) algorithm and local density position encoding, a density-aware convolution module is constructed to extract key density information between points, enrich the representation of the initial input features, and strengthen the algorithm's ability to capture local features. Then, a spatial feature self-attention module combining self-attention and spatial-attention mechanisms is constructed to strengthen the correlation between global contextual information and spatial location information; global and local features are aggregated to extract deeper contextual features, improving segmentation performance. Finally, extensive experiments are conducted on the public S3DIS and ScanNet datasets. The results show that the mean intersection over union of our algorithm reaches 69.11% and 72.52%, respectively, a significant improvement over other algorithms, verifying that the proposed algorithm has good segmentation and generalization performance.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0811004 (2024)
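The sketch below gives one plausible form of a density-aware local encoding built from k-nearest neighbors, relative offsets, and an inverse-mean-distance density estimate; it is an illustrative analogue, not the paper's module, and the value of k is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def density_position_encoding(points, k=16):
    """Illustrative density-aware neighborhood encoding: per point, concatenate
    relative offsets, neighbor distances, and a local density estimate.
    points: (N, 3) array. Returns an (N, k, 5) feature array."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)       # first neighbor is the point itself
    dists, idx = dists[:, 1:], idx[:, 1:]

    rel = points[idx] - points[:, None, :]          # (N, k, 3) relative offsets
    density = 1.0 / (dists.mean(axis=1) + 1e-8)     # (N,) inverse mean kNN distance
    density = np.broadcast_to(density[:, None, None], (*dists.shape, 1))
    return np.concatenate([rel, dists[..., None], density], axis=-1)
```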
High-Resolution Ptychography Method to Solve the Limitation Problem of Numerical Aperture and Pixel Size
Jingyi Zhang, Zihao Pei, Youyou Hu, Zhongming Yang, and Jiantai Dou
The imaging resolution of ptychography is limited by the numerical aperture and the CCD (charge coupled device) pixel size. When the CCD target surface is small, the numerical aperture is limited and high-frequency information at the edge of the target surface is easily lost when the collected spot is large. In addition, a larger pixel size leads to an insufficient sampling rate during imaging, so detailed high-frequency information is lost. We propose a high-resolution ptychography method that simultaneously addresses the resolution limits imposed by the numerical aperture and the CCD pixel size. First, an extrapolation method is used to recover the higher-order diffraction information lost to the limited numerical aperture; the image reconstructed by the extrapolation method is then fed into a generative adversarial network trained with a multi-weight loss function, which quickly overcomes the pixel-size limitation and improves the imaging resolution. The multi-weight loss function is a weighted sum of the mean square error, feature map error, and adversarial error; by setting reasonable weights, pixel-level and visual quality can be balanced. Simulation and experimental results show that the method significantly improves ptychographic resolution with high computational efficiency.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0811003 (2024)
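The multi-weight loss can be sketched as a weighted sum of pixel, feature-map, and adversarial terms, as below; the weights, feature extractor, and discriminator output convention are placeholders rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def multi_weight_loss(sr, hr, feat_extractor, discriminator,
                      w_mse=1.0, w_feat=0.1, w_adv=1e-3):
    """Sketch of a multi-weight generator loss: weighted sum of pixel MSE,
    feature-map error, and adversarial error. Weights are assumed values."""
    mse = F.mse_loss(sr, hr)                                   # pixel-level error
    feat = F.mse_loss(feat_extractor(sr), feat_extractor(hr))  # feature-map error
    adv = -torch.log(discriminator(sr) + 1e-8).mean()          # adversarial error (sigmoid output assumed)
    return w_mse * mse + w_feat * feat + w_adv * adv
```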
Three Dimensional Reconstruction Algorithm of Unmanned Aerial Vehicle Images Based on Parallel Processing
Huaiyuan Chen, Jianwu Dang, Biao Yue, and Jingyu Yang
A parallelizable incremental structure-from-motion (SFM) reconstruction algorithm is proposed to address the low efficiency and susceptibility to scene drift encountered when reconstructing large-scale unmanned aerial vehicle image datasets. First, vocabulary tree image retrieval results are used to constrain the spatial search range and improve the efficiency of image feature matching. Second, an undirected weighted scene graph is constructed from the number of feature matches and the global positioning system (GPS) information recorded by the drone platform, and a normalized cut algorithm divides the scene graph into multiple overlapping subsets. Each subset is then distributed across multicore central processing units (CPUs), and the incremental SFM reconstruction algorithm is executed in parallel. Finally, subsets are merged based on their shared reconstruction points, with strongly correlated subsets merged first. In addition, GPS information is used to add position constraints to the bundle adjustment (BA) cost function, suppressing the errors introduced by each BA optimization. To verify the effectiveness of the algorithm, experiments are conducted on three unmanned aerial vehicle datasets. The results show that, compared with the original incremental SFM algorithm, the proposed algorithm not only significantly improves the efficiency of pose estimation and scene reconstruction but also improves the accuracy of the reconstruction results.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0811002 (2024)
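The GPS-constrained bundle adjustment described above amounts to adding a camera-position prior to the reprojection cost, roughly as in the sketch below; the weight lam is an assumed parameter.

```python
import numpy as np

def ba_cost_with_gps(reproj_residuals, camera_centers, gps_positions, lam=1.0):
    """Sketch of a GPS-constrained bundle-adjustment cost: the usual sum of
    squared reprojection residuals plus a position prior tying each estimated
    camera center to its GPS measurement (weight lam is assumed)."""
    reproj_term = np.sum(np.square(reproj_residuals))
    gps_term = np.sum(np.square(np.asarray(camera_centers) - np.asarray(gps_positions)))
    return reproj_term + lam * gps_term
```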
Laser Location of Mobile Robot Based on ICP Algorithm
Longyun Zhao, Hongjun San, Jiupeng Chen, and Zhen Peng
This study presents a localization method based on an improved iterative closest point (ICP) algorithm to address the low positioning accuracy and poor real-time performance of mobile robot localization in traditional 2D environments. The algorithm first establishes a pose search space, which is explored layer by layer from low to high resolution. To accelerate the search and discard non-optimal poses, partial point cloud scan matching is performed in combination with multiple point cloud densities. The point cloud matching adopts a frame-to-image strategy, making effective use of historical frame information. The obtained optimal pose is then refined through sparse-matrix pose optimization, further improving positioning accuracy. Tests on the SLAM Benchmark dataset show that the proposed algorithm is 1.8 to 4.9 times more efficient than the popular Cartographer algorithm and has a smaller translation error. Real-world tests on a Turtlebot2 show that the proposed method exhibits substantially smaller positioning errors than Cartographer and Gmapping, along with superior real-time performance. Compared with traditional adaptive Monte Carlo localization (AMCL), the proposed method reduces the mean translation error by 0.035 m and the mean rotation error by 0.001 rad, yielding higher relocalization accuracy.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0811001 (2024)
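A rough analogue of the layered, coarse-to-fine pose search described in the abstract above: scan a small (x, y, theta) window around the current pose on an occupancy grid, halve the step size, and repeat. The grid resolution and step sizes are assumptions, and the scoring is deliberately simplistic.

```python
import numpy as np

def score(pose, scan_xy, occupancy, cell_size=0.05):
    """Score a candidate pose by the fraction of transformed scan points that
    land on occupied cells (occupancy: dict mapping integer grid cells to bool)."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    pts = scan_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
    cells = np.floor(pts / cell_size).astype(int)
    return np.mean([occupancy.get(tuple(cell), False) for cell in cells])

def coarse_to_fine_search(init_pose, scan_xy, occupancy, levels=3):
    """Layered pose search: evaluate a 3x3x3 (x, y, theta) window around the
    current best pose, halve the step size, and repeat for several levels."""
    best = np.asarray(init_pose, dtype=float)
    step = np.array([0.2, 0.2, np.deg2rad(5)])   # initial search steps (assumed)
    for _ in range(levels):
        offsets = [np.array([i, j, k]) * step
                   for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1)]
        best = max((best + o for o in offsets),
                   key=lambda p: score(p, scan_xy, occupancy))
        step /= 2.0
    return best
```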